Results 1 - 3 of 3
1.
International IEEE/EMBS Conference on Neural Engineering, NER ; 2023-April, 2023.
Article in English | Scopus | ID: covidwho-20243641

ABSTRACT

This study proposes a graph convolutional neural network (GCN) architecture for fusing radiological imaging with non-imaging tabular electronic health records (EHR) for clinical event prediction. We focused on a cohort of hospitalized patients with a positive RT-PCR test for COVID-19 and developed GCN-based models to predict three dependent clinical events (discharge from hospital, admission to the ICU, and mortality) using demographics, billing codes for procedures and diagnoses, and chest X-rays. We hypothesized that the two-fold learning opportunity provided by the GCN is well suited to fusing imaging information and tabular data as node and edge features, respectively. Our experiments support this hypothesis: GCN-based predictive models outperform single-modality and traditional fusion models. We compared the proposed models against two variations of imaging-based models, a DenseNet-121 architecture with learnable classification layers and Random Forest classifiers using a disease severity score estimated by a pre-trained convolutional neural network; the GCN-based model outperforms both imaging-only methods. We also validated our models on an external dataset, where the GCN showed valuable generalization capabilities. We noted that the edge-formation function can be adapted even after the GCN model is trained, without limiting the model's scope of application, and our models take advantage of this fact to generalize to external data. © 2023 IEEE.
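A minimal sketch of the node/edge fusion idea described in this abstract, not the authors' implementation: patient nodes carry chest X-ray embeddings (e.g., from a DenseNet-121 backbone) as node features, while a similarity rule over tabular EHR vectors defines the weighted edges. The helper names, the cosine-similarity threshold, and the single-hidden-layer GCN are illustrative assumptions.

import torch
import torch.nn as nn

def ehr_adjacency(ehr: torch.Tensor, threshold: float = 0.8) -> torch.Tensor:
    # Cosine similarity between patients' tabular EHR vectors defines weighted edges
    # (an assumed edge-formation function, kept outside the trained network).
    ehr_n = torch.nn.functional.normalize(ehr, dim=1)
    sim = ehr_n @ ehr_n.T
    adj = torch.where(sim >= threshold, sim, torch.zeros_like(sim))
    adj.fill_diagonal_(1.0)                                   # self-loops
    deg_inv_sqrt = adj.sum(1).clamp(min=1e-6).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]  # D^-1/2 A D^-1/2

class PatientGCN(nn.Module):
    def __init__(self, img_dim: int, hidden: int = 64, n_events: int = 3):
        super().__init__()
        self.w1 = nn.Linear(img_dim, hidden)
        self.w2 = nn.Linear(hidden, n_events)   # discharge, ICU admission, mortality

    def forward(self, img_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = torch.relu(adj @ self.w1(img_feats))  # one round of neighborhood aggregation
        return self.w2(adj @ h)                   # per-patient event logits

# Usage sketch (shapes and feature sources are assumptions):
# img_feats: [n_patients, img_dim] chest X-ray embeddings
# ehr:       [n_patients, ehr_dim] demographics + billing-code vectors
# logits = PatientGCN(img_dim=1024)(img_feats, ehr_adjacency(ehr))

Because the adjacency matrix is built outside the network, the edge-formation rule can be swapped at inference time, mirroring the adaptability the abstract describes.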

2.
17th European Conference on Computer Vision, ECCV 2022 ; 13807 LNCS:605-620, 2023.
Article in English | Scopus | ID: covidwho-2251896

ABSTRACT

Successful data representation is a fundamental factor in machine-learning-based medical imaging analysis. Deep learning (DL) plays an essential role in robust representation learning, but deep models can quickly overfit intricate patterns and fail to generalize to unseen data. Hence the importance of strategies that help deep models discover useful priors from data and learn their intrinsic properties. Our model, which we call a dual role network (DRN), uses a dependency-maximization approach based on Least-Squares Mutual Information (LSMI). LSMI leverages dependency measures to ensure representation invariance and local smoothness. While prior works have used information-theoretic dependency measures such as mutual information, these are computationally expensive due to the density estimation step. In contrast, our proposed DRN with the LSMI formulation does not require density estimation and can be used as an alternative approximation to mutual information. Experiments on the CT-based COVID-19 Detection and COVID-19 Severity Detection challenges of the 2nd COV19D competition [24] demonstrate the effectiveness of our method compared to the competition's baseline. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
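As a rough illustration of the LSMI idea only (a generic estimator, not the paper's DRN code): the sketch below estimates squared-loss mutual information between two representation batches by fitting the density ratio p(z1, z2) / (p(z1) p(z2)) with Gaussian kernel basis functions and a closed-form ridge solution, so no explicit density estimation is needed. The kernel width, regularizer, and simple plug-in estimate are assumptions.

import torch

def lsmi(z1: torch.Tensor, z2: torch.Tensor, sigma: float = 1.0, lam: float = 1e-3) -> torch.Tensor:
    # z1, z2: [n, d1], [n, d2] paired representation batches.
    n = z1.shape[0]
    # Gaussian kernels centred on the paired samples act as basis functions.
    k1 = torch.exp(-torch.cdist(z1, z1) ** 2 / (2 * sigma ** 2))   # [n, n]
    k2 = torch.exp(-torch.cdist(z2, z2) ** 2 / (2 * sigma ** 2))   # [n, n]
    phi = k1 * k2                       # basis values evaluated at the paired samples
    h = phi.mean(dim=0)                 # expectation over the joint (paired) samples
    H = (k1 @ k1) * (k2 @ k2) / (n * n) # expectation over the product of marginals
    theta = torch.linalg.solve(H + lam * torch.eye(n, device=z1.device), h)
    return 0.5 * h @ theta - 0.5        # plug-in squared-loss MI estimate

Since every step is differentiable, such an estimate could in principle be maximized as a training regularizer to encourage dependency between two views of the data, which is the role LSMI plays in the dependency-maximization approach the abstract describes.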

3.
6th International Conference on Advances in Computing and Data Sciences, ICACDS 2022 ; 1614 CCIS:112-123, 2022.
Article in English | Scopus | ID: covidwho-2013955

ABSTRACT

Amidst the increasing surge of COVID-19 infections worldwide, chest X-ray (CXR) imaging data have proven incredibly helpful for the fast screening of COVID-19 patients, particularly in resolving overcapacity in urgent care centers and emergency departments. An accurate COVID-19 detection algorithm can further help reduce the disease burden. In this study, we put forward WE-Net, an ensemble deep learning (DL) framework for detecting pulmonary manifestations of COVID-19 from CXRs. We incorporated lung segmentation using U-Net to identify the thoracic Region of Interest (RoI), which was then used to train DL models on the relevant features. ImageNet pre-trained DL models were fine-tuned, trained, and evaluated on publicly available CXR collections. Ensemble methods such as stacked generalization, voting, averaging, and weighted averaging were used to combine predictions from the best-performing models. The purpose of the ensemble techniques is to overcome challenges such as generalization errors caused by noise and by training on small datasets. Experimental evaluations showed a significant improvement in performance with the deep fusion neural network, i.e., the WE-Net model, which achieved 99.02% accuracy and 0.989 area under the curve (AUC) in detecting COVID-19 from CXRs. The combined use of image segmentation, pre-trained DL models, and ensemble learning (EL) boosted the prediction results. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
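A hedged illustration of the weighted-average ensembling step, one of the several ensemble strategies the abstract lists, and not the authors' WE-Net code: per-model COVID-19 class probabilities are combined with weights proportional to each model's validation score. The function name, the weighting rule, and the model names and scores in the usage comment are hypothetical.

import numpy as np

def weighted_average_ensemble(probs, val_scores):
    # probs: list of [n_images, n_classes] probability arrays, one per base model
    # val_scores: validation metric per model, normalized into ensemble weights
    w = np.asarray(val_scores, dtype=float)
    w = w / w.sum()
    stacked = np.stack(probs, axis=0)          # [n_models, n_images, n_classes]
    return np.tensordot(w, stacked, axes=1)    # weighted average over the model axis

# Usage sketch: a U-Net lung mask crops the RoI before each base model predicts.
# p_ens = weighted_average_ensemble([p_resnet, p_densenet, p_vgg], [0.97, 0.98, 0.96])
# y_pred = p_ens.argmax(axis=1)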
